DistilHuBERT is a lightweight speech representation model obtained by layer-wise (hierarchical) distillation of HuBERT: a small student network learns to predict hidden representations from several layers of the HuBERT teacher, which substantially reduces model size and computational cost while retaining most of the teacher's performance.
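As a rough illustration of the layer-wise distillation objective, the sketch below combines an L1 regression term with a log-sigmoid cosine-similarity term between each student prediction head and its target teacher layer. This is a simplified, self-contained approximation (the layer indices, the `lam` weight, and the exact normalization are assumptions for illustration, not the model's verified training code):

```python
import numpy as np

def _sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def layerwise_distill_loss(student_preds, teacher_layers, lam=1.0):
    """Sketch of a hierarchical distillation loss.

    student_preds / teacher_layers: lists of arrays shaped (T, D),
    one pair per distilled teacher layer. Each student prediction
    head regresses one teacher layer; the loss sums an L1 term and
    a cosine-similarity term (assumed formulation).
    """
    total = 0.0
    for pred, target in zip(student_preds, teacher_layers):
        l1 = np.abs(pred - target).mean()
        cos = (pred * target).sum(-1) / (
            np.linalg.norm(pred, axis=-1) * np.linalg.norm(target, axis=-1) + 1e-8
        )
        # Encourage high cosine similarity via a log-sigmoid term.
        total += l1 - lam * np.log(_sigmoid(cos) + 1e-8).mean()
    return total / len(student_preds)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    teacher = [rng.normal(size=(50, 768)) for _ in range(3)]
    student = [rng.normal(size=(50, 768)) for _ in range(3)]
    # Perfect predictions should score lower than random ones.
    print(layerwise_distill_loss(teacher, teacher))
    print(layerwise_distill_loss(student, teacher))
```

A perfect student (predictions equal to the teacher layers) drives the L1 term to zero and the cosine term to its minimum, so its loss is strictly lower than that of a random student.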
Tags: Speech Recognition · Transformers · English